πŸ•ΈοΈ Ada Research Browser

DEMO_ROADMAP.md
← Back

RCD-CUI Demo Roadmap

A phased implementation plan for demonstrating rcd-cui capabilities to the RCD team and stakeholders.

Goals

  1. Visibility: Provide living artifacts that showcase compliance posture without manual effort
  2. Reproducibility: Enable repeatable demos that work on any developer's machine
  3. Realism: Demonstrate actual HPC/CUI workflows, not just config management
  4. Education: Help team members understand NIST 800-171 through hands-on interaction

Demo Scenarios

All phases build toward supporting these four demonstration scenarios:

| Scenario | Description | Primary Audience |
|---|---|---|
| A: Project Onboarding | End-to-end CUI project setup | PIs, Researchers, Leadership |
| B: Compliance Drift | Detect and remediate configuration drift | Sysadmins, CISO |
| C: Auditor Walkthrough | Generate and review audit evidence package | CISO, External Auditors |
| D: Node Lifecycle | Provision, validate, and decommission compute nodes | Sysadmins, Security Team |

Phase 1: CI/CD and Living Dashboard (Immediate)

Objective: Automated validation on every commit with a published compliance dashboard.

Deliverables

  1. GitHub Actions Workflow (.github/workflows/ci.yml)
     • Trigger on PR: lint, syntax-check, YAML validation
     • Trigger on merge to main: build EE image, generate docs, publish dashboard
     • Nightly schedule: full assessment against test inventory

  2. GitHub Pages Dashboard
     • Published at https://<org>.github.io/rcd-cui/
     • Auto-updated on each merge to main
     • Contents:
       • Compliance dashboard (reports/dashboard/index.html)
       • Framework crosswalk (downloadable CSV)
       • Generated documentation (PI guide, researcher quickstart, etc.)

  3. README Badges
     • CI status badge
     • SPRS score badge (static initially, dynamic in Phase 3)
     • Last assessment date badge

  4. Branch Protection Rules
     • Require CI pass before merge
     • Require at least one approval
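The static SPRS badge can be generated with Shields.io's static badge URL scheme rather than hand-edited. A minimal sketch; the helper name and the example score of 87 are illustrative, not part of the repo:

```python
def static_badge(label: str, message: str, color: str) -> str:
    """Build a Shields.io static badge URL for embedding in README.md.

    Shields.io escaping rules for the path segments: '-' becomes '--',
    '_' becomes '__', and spaces are percent-encoded.
    """
    def esc(s: str) -> str:
        return s.replace("-", "--").replace("_", "__").replace(" ", "%20")
    return f"https://img.shields.io/badge/{esc(label)}-{esc(message)}-{color}"

# Markdown for a static SPRS badge (updated by hand until Phase 3):
print(f"![SPRS]({static_badge('SPRS', '87', 'yellow')})")
# → ![SPRS](https://img.shields.io/badge/SPRS-87-yellow)
```

The same helper covers the last-assessment-date badge by swapping label and message.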

Success Criteria

Supports Scenarios


Phase 2: Local Demo Lab (Short-term)

Objective: Reproducible multi-VM environment for interactive demonstrations.

Deliverables

  1. Vagrant Lab Environment (demo/vagrant/)
     • Vagrantfile defining 3-4 Rocky Linux 9 VMs:
       • mgmt01: FreeIPA server, Wazuh manager (management zone)
       • login01: Login/submit node (internal zone)
       • compute01, compute02: Compute nodes (restricted zone)
     • Minimal Slurm cluster (slurmctld on mgmt, slurmd on compute)
     • Shared storage via NFS (simulating Lustre/BeeGFS)

  2. Demo Orchestration Scripts (demo/scripts/)
     • demo-setup.sh: Bring up the lab and run initial provisioning
     • demo-reset.sh: Reset to baseline state between demos
     • demo-break.sh: Introduce compliance violations for Scenario B
     • demo-fix.sh: Run remediation playbooks

  3. Scenario Playbooks (demo/playbooks/)
     • scenario-a-onboard.yml: Onboard fictional "Project Helios" with users
     • scenario-b-drift.yml: Orchestrated break/detect/fix cycle
     • scenario-c-audit.yml: Generate full auditor package
     • scenario-d-lifecycle.yml: Add new node, validate, decommission

  4. Demo Narrative Scripts (demo/narratives/)
     • Markdown guides for each scenario with talking points
     • Expected outputs and screenshots
     • Timing estimates for presentations
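Each scenario reduces to a fixed sequence of commands, which makes a dry-run mode cheap to add for rehearsals. A Python sketch of the Scenario B break/detect/fix cycle; the script and playbook paths are the planned names from the demo/ layout above, and the run order is the only contract:

```python
import subprocess

# Ordered command plan for Scenario B: inject drift, detect it with an
# assessment run, then remediate. Paths follow the planned demo/ layout.
SCENARIO_B_PLAN = [
    ["demo/scripts/demo-break.sh"],                              # inject violations
    ["ansible-playbook", "demo/playbooks/scenario-b-drift.yml"], # detect drift
    ["demo/scripts/demo-fix.sh"],                                # remediate
]

def run_plan(plan, dry_run=True):
    """Execute (or just echo) each step in order; stop on first failure."""
    executed = []
    for cmd in plan:
        if dry_run:
            print("DRY-RUN:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # raises on nonzero exit status
        executed.append(" ".join(cmd))
    return executed

steps = run_plan(SCENARIO_B_PLAN)  # dry run: prints the plan without touching VMs
```

Keeping the plan as data (rather than inlining commands) lets demo-reset.sh style tooling reuse the same sequences.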

Lab Specifications

| VM | vCPUs | RAM | Disk | Role |
|---|---|---|---|---|
| mgmt01 | 2 | 4 GB | 40 GB | FreeIPA, Wazuh, slurmctld |
| login01 | 2 | 2 GB | 20 GB | Login node, submit host |
| compute01 | 2 | 2 GB | 20 GB | Compute node |
| compute02 | 2 | 2 GB | 20 GB | Compute node |

Total resources: 8 vCPUs, 10 GB RAM (fits on a modern laptop with 16 GB+ of RAM)

Success Criteria

Supports Scenarios


Phase 3: Compliance Trending and Analytics (Medium-term)

Objective: Historical tracking of compliance posture with trend visualization.

Deliverables

  1. Assessment History Storage
     • JSON files in data/assessment_history/YYYY-MM-DD.json
     • Git-tracked for audit trail
     • Schema-validated with Pydantic

  2. SPRS Trend Tracking
     • Historical SPRS scores stored per assessment
     • Trend line visualization on dashboard
     • Per-control-family breakdown over time

  3. Enhanced Dashboard
     • Interactive charts (Chart.js or similar)
     • Date range selector for historical views
     • Control-level drill-down
     • POA&M burndown chart
     • "Days in current state" metrics

  4. Compliance Alerts
     • GitHub Actions workflow detects score drops
     • Creates a GitHub Issue for significant regressions
     • Optional: Slack/Teams webhook notifications

  5. Dynamic README Badge
     • Shields.io badge pulling from the latest assessment
     • Shows current SPRS score with color coding:
       • Green: 110 (perfect)
       • Yellow: 80-109
       • Red: <80
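The color thresholds above fit in a tiny helper that also emits the JSON consumed by Shields.io's endpoint badge (the schemaVersion/label/message/color keys are the documented endpoint-badge format; the function names are illustrative):

```python
import json

def sprs_color(score: int) -> str:
    """Map an SPRS score to the badge color scheme above."""
    if score >= 110:
        return "green"   # 110 is a perfect SPRS score
    if score >= 80:
        return "yellow"
    return "red"

def endpoint_badge(score: int) -> str:
    """Shields.io endpoint-badge JSON, served from the published dashboard."""
    return json.dumps({
        "schemaVersion": 1,
        "label": "SPRS",
        "message": str(score),
        "color": sprs_color(score),
    })
```

Publishing this JSON as a static file on the GitHub Pages site is enough for a dynamic badge; no server-side logic is needed.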

Data Model

data/assessment_history/2026-02-15.json:

```json
{
  "assessment_date": "2026-02-15T14:30:00Z",
  "sprs_score": 87,
  "controls_assessed": 110,
  "controls_passing": 95,
  "controls_failing": 12,
  "controls_not_applicable": 3,
  "by_family": {
    "AC": {"passing": 18, "failing": 2},
    "AU": {"passing": 12, "failing": 1},
    ...
  },
  "poam_items": 8,
  "execution_environment_version": "1.2.0",
  "commit_sha": "abc123"
}
```
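A lightweight consistency check on a history record can run in CI before the record is committed. This is a stdlib-only sketch of the invariants, not the planned Pydantic implementation: it verifies that the per-status counts sum to controls_assessed and flags a regression against the prior score (the function name is illustrative):

```python
def check_record(record, previous_score=None):
    """Return a list of problems found in an assessment-history record."""
    problems = []
    required = ("assessment_date", "sprs_score", "controls_assessed",
                "controls_passing", "controls_failing", "controls_not_applicable")
    for key in required:
        if key not in record:
            problems.append(f"missing field: {key}")
    if not problems:
        # Per-status counts must partition the assessed controls exactly.
        total = (record["controls_passing"] + record["controls_failing"]
                 + record["controls_not_applicable"])
        if total != record["controls_assessed"]:
            problems.append(f"status counts sum to {total}, "
                            f"expected {record['controls_assessed']}")
        # Any score drop versus the previous assessment is worth an alert.
        if previous_score is not None and record["sprs_score"] < previous_score:
            problems.append(f"SPRS regression: {previous_score} -> "
                            f"{record['sprs_score']}")
    return problems
```

Run against the sample record above, this returns an empty list; with a prior score of 92 it would report the 92 -> 87 regression that the Compliance Alerts workflow should turn into a GitHub Issue.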

Success Criteria

Supports Scenarios


Phase 4: Integration Demo Environment (Long-term)

Objective: Production-like environment with full observability stack.

Deliverables

  1. Kubernetes/Podman Compose Stack
     • Alternative to Vagrant for container-native demos
     • Faster startup, lower resource usage
     • Suitable for cloud deployment (workshop environments)

  2. Grafana Integration
     • Import assessment data as a Grafana data source
     • Pre-built compliance dashboards
     • Unified view with infrastructure metrics

  3. Wazuh Correlation
     • Security events linked to compliance controls
     • Real-time alerting for control violations
     • Demonstrate SIEM integration for auditors

  4. ServiceNow/Jira Integration
     • Auto-create tickets for new POA&M items
     • Link remediation commits to tickets
     • Demonstrate enterprise workflow integration

  5. Cloud Deployment Option
     • Terraform modules for AWS/GCP/Azure
     • Ephemeral demo environments on demand
     • Cost tracking and auto-teardown
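The Grafana import can be as simple as flattening the Phase 3 history files into a time series that a JSON-capable data source plugin can chart directly. A sketch under that assumption; the function name and output field names are illustrative, chosen to mirror the Phase 3 data model:

```python
import json
from pathlib import Path

def sprs_timeseries(history_dir):
    """Flatten data/assessment_history/*.json into time/value points.

    Sorting by filename works because the files are named YYYY-MM-DD.json,
    so lexical order equals chronological order.
    """
    points = []
    for path in sorted(Path(history_dir).glob("*.json")):
        record = json.loads(path.read_text())
        points.append({
            "time": record["assessment_date"],
            "sprs_score": record["sprs_score"],
            "poam_items": record.get("poam_items", 0),
        })
    return points
```

Serving this list from the dashboard site gives Grafana both the SPRS trend line and a POA&M burndown from one endpoint.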

Success Criteria

Supports Scenarios


Implementation Order

Phase 1 ─────────────────────────────────────────────────────────►
         CI/CD Pipeline β”‚ GitHub Pages β”‚ Badges β”‚ Branch Protection
                        β”‚
Phase 2 ────────────────┴────────────────────────────────────────►
         Vagrant Lab β”‚ Demo Scripts β”‚ Scenario Playbooks β”‚ Narratives
                                     β”‚
Phase 3 ─────────────────────────────┴───────────────────────────►
         Assessment History β”‚ Trend Charts β”‚ Alerts β”‚ Dynamic Badges
                                            β”‚
Phase 4 ────────────────────────────────────┴────────────────────►
         Container Stack β”‚ Grafana β”‚ Wazuh β”‚ Cloud Deploy

Speckit Specifications

Each phase should be implemented as a separate speckit specification:

| Spec | Phase | Name | Key Deliverables |
|---|---|---|---|
| 005 | 1 | CI/CD Pipeline and Living Dashboard | GitHub Actions, Pages, badges |
| 006 | 2 | Local Demo Lab Environment | Vagrant, demo scripts, narratives |
| 007 | 3 | Compliance Trending and Analytics | History storage, charts, alerts |
| 008 | 4 | Integration Demo Environment | Containers, Grafana, cloud deploy |

Resource Requirements

Development Time (Estimates)

| Phase | Specification | Implementation | Testing | Total |
|---|---|---|---|---|
| 1 | 2 hours | 4 hours | 2 hours | 8 hours |
| 2 | 4 hours | 16 hours | 8 hours | 28 hours |
| 3 | 3 hours | 12 hours | 6 hours | 21 hours |
| 4 | 4 hours | 20 hours | 10 hours | 34 hours |

Infrastructure

| Phase | Local Resources | Cloud Resources |
|---|---|---|
| 1 | None (CI runs in GitHub) | GitHub Actions minutes |
| 2 | 10 GB RAM, 100 GB disk | None |
| 3 | Same as Phase 1 | Same as Phase 1 |
| 4 | 16 GB RAM or cloud | Optional cloud instances |

Getting Started

To begin implementation:

  1. Review this roadmap with the RCD team
  2. Prioritize phases based on upcoming demo needs
  3. Create spec 005 for Phase 1 using /speckit.specify
  4. Implement iteratively, demoing progress at each phase

Success Metrics

| Metric | Target | Measurement |
|---|---|---|
| Time to first demo | < 2 weeks | Phase 1 complete |
| Demo setup time | < 10 minutes | vagrant up or container start |
| Dashboard freshness | < 1 hour | Auto-update on merge |
| Team adoption | 100% | All team members can run the demo locally |
| Stakeholder feedback | Positive | Post-demo surveys |